The Variable Structure Learning Automaton Network
Authors
Abstract
Similar resources
The Neural Network Pushdown Automaton: Model, Stack and Learning Simulations
In order for neural networks to learn complex languages or grammars, they must have sufficient computational power or resources to recognize or generate such languages. Though many approaches have been discussed, one obvious approach to enhancing the processing power of a recurrent neural network is to couple it with an external stack memory, in effect creating a neural network pushdown automaton...
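The coupling described in this abstract can be illustrated with a small sketch. The code below is an assumed toy model, not the paper's NNPDA: the class name TinyNNPDA, the parameter shapes, and the hard argmax over stack actions are all invented here for illustration. A recurrent state update reads the input symbol, the previous hidden state, and the top of a continuous external stack, and a push/pop/no-op action then updates that stack.

import numpy as np

rng = np.random.default_rng(0)

class TinyNNPDA:
    """Toy recurrent network coupled with a continuous external stack."""
    def __init__(self, n_sym, n_hidden, stack_dim):
        # Hypothetical parameter shapes; the published model differs in detail.
        self.Wx = rng.normal(0, 0.1, (n_hidden, n_sym))       # input weights
        self.Wh = rng.normal(0, 0.1, (n_hidden, n_hidden))     # recurrent weights
        self.Ws = rng.normal(0, 0.1, (n_hidden, stack_dim))    # stack-top weights
        self.Wa = rng.normal(0, 0.1, (3, n_hidden))            # push / pop / no-op scores
        self.Wv = rng.normal(0, 0.1, (stack_dim, n_hidden))    # vector to push
        self.stack_dim = stack_dim

    def step(self, x, h, stack):
        # The state update sees the current symbol, the previous state
        # and the top of the external stack.
        top = stack[-1] if stack else np.zeros(self.stack_dim)
        h = np.tanh(self.Wx @ x + self.Wh @ h + self.Ws @ top)
        # Pick a stack action; a trained NNPDA would use soft, differentiable actions.
        action = np.argmax(self.Wa @ h)
        if action == 0:                      # push a new continuous vector
            stack.append(np.tanh(self.Wv @ h))
        elif action == 1 and stack:          # pop the top element
            stack.pop()
        return h, stack

# Run a short string over the alphabet {a, b} through the untrained model.
model = TinyNNPDA(n_sym=2, n_hidden=8, stack_dim=4)
h, stack = np.zeros(8), []
for ch in "abba":
    x = np.eye(2)["ab".index(ch)]            # one-hot encode the symbol
    h, stack = model.step(x, h, stack)
print("stack depth after reading 'abba':", len(stack))

In the actual model the stack operations are continuous and trained end to end together with the network weights; the argmax above is only there to keep the sketch short.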
Learning Bayesian Network Structure using Markov Blanket in K2 Algorithm
A Bayesian network is a graphical model that represents a set of random variables and their causal relationships via a Directed Acyclic Graph (DAG). There are basically two methods for learning a Bayesian network: parameter learning and structure learning. One of the most effective structure-learning methods is the K2 algorithm. Because the performance of the K2 algorithm depends on node...
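Since this entry centres on the K2 algorithm, a minimal sketch may help. The code below is an assumption made here, not the paper's implementation: it uses the standard Cooper-Herskovits (K2) score in log space, and the function names k2_log_score and k2_parents as well as the toy OR dataset are invented for illustration.

import numpy as np
from math import lgamma
from itertools import product

def k2_log_score(data, child, parents, arity):
    """Log Cooper-Herskovits (K2) score of `child` given `parents` on integer data."""
    r = arity[child]
    score = 0.0
    for ps in product(*[range(arity[p]) for p in parents]):
        rows = data
        for p, v in zip(parents, ps):
            rows = rows[rows[:, p] == v]      # select one parent configuration
        score += lgamma(r) - lgamma(len(rows) + r)
        for k in range(r):
            score += lgamma(np.sum(rows[:, child] == k) + 1)
    return score

def k2_parents(data, child, candidates, arity, max_parents=2):
    """Greedily add whichever candidate parent most improves the K2 score."""
    parents, best = [], k2_log_score(data, child, [], arity)
    while len(parents) < max_parents:
        gains = [(k2_log_score(data, child, parents + [c], arity), c)
                 for c in candidates if c not in parents]
        if not gains:
            break
        top, c = max(gains)
        if top <= best:                       # stop when no candidate helps
            break
        best, parents = top, parents + [c]
    return parents

# Toy binary dataset: column 2 is the OR of columns 0 and 1.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 3))
X[:, 2] = X[:, 0] | X[:, 1]
print("parents found for node 2:", k2_parents(X, child=2, candidates=[0, 1], arity=[2, 2, 2]))

In full K2 the candidate set for each node is restricted to the nodes that precede it in a fixed ordering, which is the node-ordering dependence this entry alludes to.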
Learning Deterministic Finite Automaton with a Recurrent Neural Network
We consider the problem of learning a finite automaton with recurrent neural networks from positive evidence. We train Elman recurrent neural networks with a set of sentences in a language and extract a finite automaton by clustering the states of the trained network. We observe that the generalizations beyond the training set, in the language recognized by the extracted automaton, are due to ...
Learning a deterministic finite automaton with a recurrent neural network
We consider the problem of learning a finite automaton with recurrent neural networks, given a training set of sentences in a language. We train Elman recurrent neural networks on the prediction task and study experimentally what these networks learn. We found that the network tends to encode an approximation of the minimum automaton that accepts only the sentences in the training set.
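The extraction step these two entries describe, quantising the hidden states of a trained network into a finite set of automaton states, can be sketched as follows. This is an assumed illustration, not the authors' code: the helpers kmeans and extract_automaton are invented names, and random vectors stand in for the activations of a trained Elman network.

import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """A deliberately small k-means; any off-the-shelf clustering would do."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def extract_automaton(trajectories, k):
    """trajectories: list of (hidden_vectors, input_symbols) pairs, one per string.
    Each hidden_vectors array has one row per time step, including the start state."""
    all_h = np.concatenate([h for h, _ in trajectories])
    labels = kmeans(all_h, k)
    transitions, i = {}, 0
    for h, syms in trajectories:
        states = labels[i:i + len(h)]
        i += len(h)
        # The transition from state t to state t+1 is labelled with the symbol read at step t.
        for (s, t), a in zip(zip(states, states[1:]), syms):
            transitions[(int(s), a)] = int(t)   # conflicts are simply overwritten here
    return transitions

# Random vectors standing in for a trained Elman network's hidden activations.
rng = np.random.default_rng(2)
traj = [(rng.normal(size=(5, 3)), list("abab")),
        (rng.normal(size=(4, 3)), list("bba"))]
print(extract_automaton(traj, k=3))

A real extraction pipeline would need a policy for conflicting transitions (for example, majority vote over observed targets) and would typically minimise the resulting automaton afterwards.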
Learning Stochastic Logical Automaton
This paper is concerned with algorithms for the logical generalisation of probabilistic temporal models from examples. The algorithms combine logic and probabilistic models through inductive generalisation. The inductive generalisation algorithms consist of three parts. The first part describes the graphical generalisation of state transition models. State transition models are generalised by a...
Journal
Journal title: IEEJ Transactions on Electronics, Information and Systems
Year: 1998
ISSN: 0385-4221,1348-8155
DOI: 10.1541/ieejeiss1987.118.3_333